Prolog vs mini-Kanren

There's an interesting Q&A on Stack Overflow.

By Charles Stewart at 2017-03-10 18:37
The complexity of abstract machines

I previously wrote about a line of research by Guy Blelloch on cost semantics for functional languages, which lets us make precise claims about the complexity of functional programs without leaving their usual and comfortable programming model (beta-reduction). While the complexity behavior of weak reduction strategies, such as call-by-value and call-by-name, is by now relatively well understood, the lambda-calculus has a much richer range of reduction strategies, in particular those that reduce under lambda-abstractions, whose complexity behavior is noticeably more subtle and was, until recently, not well understood. (This has become a practical concern with the rise of proof assistants, which must implement reduction under binders and care a great deal about the complexity of their reduction strategy, since it consumes a lot of time during type/proof-checking.) Beniamino Accatoli, who has co-authored much of the work in this area, recently published on arXiv a new paper of survey quality; it is a good introduction to this line of work and gives pointers to the wider literature.
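To see why counting beta-steps is a delicate cost measure in the first place, here is the folklore "size explosion" example (a standard illustration in this literature, not something specific to the surveyed paper): a family of small terms whose normal forms are exponentially larger, so an implementation that charges constant cost per step must represent results with sharing. Reducing arguments first (call-by-value order):

```latex
% Size explosion (folklore example): terms of size O(n) whose normal
% forms have size at least 2^n, reached in only n beta-steps.
\[
  \delta \;=\; \lambda x.\, x\, x,
  \qquad
  t_1 = \delta\, y,
  \qquad
  t_{n+1} = \delta\, t_n
\]
\[
  t_n \;\to_\beta^{\,n}\; s_n
  \quad\text{where}\quad
  s_1 = y\, y,\;\; s_{n+1} = s_n\, s_n,\;\; |s_n| \ge 2^n .
\]
```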
By gasche at 2017-01-12 01:09 | Lambda Calculus
Stroustrup's Rule and Layering Over Time

Dave Herman is the voice of the oppressed: syntax is important, contrary to what you have been told! To illustrate, he discusses what he calls Stroustrup's Rule:
Do Be Do Be Do

Monads and algebraic effects are general concepts that give a definition of what a "side-effect" can be: an instance of a monad, or an instance of an algebraic effect, is a specific realization of a side-effect. While most programming languages provide a fixed family of built-in side-effects, monads and algebraic effects give a structured way to introduce a new notion of effect as a library.

A recent avenue of programming language research is how to locally define several realizations of the same effect interface/signature. There may be several valid notions of "state" or "non-determinism" or "probabilistic choice", and different parts of a program may not require the same realization of those -- a typical use-case would be mocking an interaction with the outside world, for example. Can we let users locally define a new interpretation of an effect, or write code that is generic over the specific interpretation? There are several existing approaches, such as monad transformer stacks, free monad interpreters, monad reification and, lately, effect handlers, as proposed in the programming language Eff.

Frank, presented in the publication below, is a new language with user-defined effects that makes effect handling a natural part of basic functional programming, instead of a separate, advanced feature. It is a significant advance in language design that simplifies effectful programming. Functions, called operators, do not just compute a result from the value of their arguments; they interact with the computation of those arguments by having the opportunity to handle any side-effects arising during their evaluation. It feels like a new programming style, a new calling convention that blends call-by-value and effect handling -- Sam Lindley suggested the name call-by-handling.

Frank also proposes a new type-and-effect system that corresponds to this new programming style. Operators handle some of the effects raised by their arguments, and silently forward the rest to the computation context; their argument types indicate which effects they handle. In other words, the static information carried by an argument's type is the delta/increment between the effects permitted by the ambient computation and the effects of evaluating this argument. Frank calls this an adjustment over the ambient ability. This delta/increment style results in lightweight types for operators that can be used in different contexts (a form of effect polymorphism) without explicit quantification over effect variables. This design takes part in a research conversation on how to make type-and-effect systems usable, which is the major roadblock for their wider adoption -- Koka and Links are also promising in that regard, and had to explore elaborate conventions to elide their polymorphic variables.

Another important part of Frank's type-system design is the explicit separation between values that are and computations that do. Theoretical works have long made this distinction (for example with Call-By-Push-Value), but programmers are usually offered the dichotomy of either having only effectful expressions or expressing all computations as values (Haskell's indirect style). Frank puts that distinction in the hands of the user -- this is different from distinguishing pure from impure computations, as done in F* or WhyML.

If you wish to play with the language, a prototype implementation is available.
Do Be Do Be Do (arXiv)
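To give a flavor of the general effect-handler idea, here is a small sketch. This is not Frank syntax and not code from the paper; it uses OCaml 5's effect handlers, and the effect name and handler below are made up for illustration:

```ocaml
(* A rough OCaml 5 sketch of effect handling (illustrative only, not
   Frank): a computation performs an abstract effect, and the caller
   chooses how to interpret it. *)
open Effect
open Effect.Deep

(* One effect signature: ask the outside world for an integer. *)
type _ Effect.t += Ask : int Effect.t

(* A computation written against the effect, not an implementation. *)
let sum_of_two_answers () : int = perform Ask + perform Ask

(* A handler gives one local interpretation of Ask; here, a mock that
   always answers [n]. A second handler could read stdin instead. *)
let with_constant_answer (n : int) (comp : unit -> 'a) : 'a =
  match_with comp ()
    { retc = (fun x -> x);
      exnc = raise;
      effc = (fun (type b) (eff : b Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (b, _) continuation) -> continue k n)
        | _ -> None) }

let () =
  (* Prints 42: both Asks are answered with 21 by this handler. *)
  Printf.printf "%d\n" (with_constant_answer 21 sum_of_two_answers)
```

This only mirrors the "several local interpretations of one effect" point; Frank goes further by folding handling into the ordinary calling convention of operators and by tracking the handled effects in their argument types.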
Contextual isomorphisms
Contextual Isomorphisms
This paper is based on a very simple idea that everyone familiar with type systems can enjoy. It then applies it in a technical setting where it brings a useful contribution. I suspect that many readers will find that second part too technical, but they may still enjoy the idea itself. To facilitate this, I will rephrase the paper's abstract in a way that I hope makes it accessible to a larger audience.

The problem that the paper solves is: how do we know what it means for two types to be equivalent? For many languages there are reasonable definitions of equivalence (such as: there exists a bijection between these two types in the language), but for more advanced languages these definitions break down. For example, in the presence of hidden mutable state, one can build a pair of functions between the one-element type and the two-element type that compose to the identity in both directions, even though these two types should clearly not be considered equivalent.
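Here is a minimal sketch of that phenomenon (my own illustration in OCaml, not taken from the paper):

```ocaml
(* With a hidden reference cell, unit and bool admit maps that compose
   to the identity both ways, so "there is a bijection" is too weak a
   notion of type equivalence in the presence of mutable state. *)
let cell : bool ref = ref false

let to_unit (b : bool) : unit = cell := b
let to_bool () : bool = !cell

(* to_bool (to_unit b) returns b for every b, and to_unit (to_bool ())
   returns () trivially, so both composites are identities -- yet unit
   and bool are clearly not the same type. *)
```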
To define what it means for two program fragments to be equivalent, we have a "gold standard", which is contextual equivalence: two program fragments are contextually equivalent if we can replace one with the other in any complete program without changing its behavior. This is simple to state, it is usually clear how to instantiate this definition for a new system, and it gives you a satisfying notion of equivalence. It may not be the most convenient one to work with, so people define other, more specific notions of equivalence (typically beta-eta-equivalence or logical relations); it is fine if they are more sophisticated, and their definition harder to justify or understand, because they can always be compared to this simple definition to gain confidence.

The simple idea in the paper above is to use this exact same trick to define what it means for two types to be equivalent. Naively, one could say that two types are equivalent if, in any well-typed program, one can replace some occurrences of the first type by occurrences of the second type, all other things being unchanged. This does not quite work, as changing the types that appear in a program without changing its terms would create ill-typed terms. So instead, the paper proposes that two types are equivalent when we are told how to transform any program using the first type into a program using the second type, in a way that is bijective (invertible) and compositional -- see the paper for details.

Then, the author can validate this definition by showing that, when instantiated to languages (simple or complex) where existing notions of equivalence have been proposed, this new notion of equivalence corresponds to the previous notions.

(Readers may find that even the warm-up part of the paper, namely sections 1 to 4, pages 1 to 6, is rather dense, with a compactly exposed general idea and arguably a lack of concrete examples that would help understanding. Surely this terseness is in large part a consequence of strict page limits -- conference articles are the tweets of computer science research. A nice side-effect (no pun intended) is that you can observe a carefully chosen formal language at work, designed to expose the setting and perform the relevant proofs in minimal space: category theory, and in particular the concept of naturality, is the killer space-saving measure here.)

Joe Armstrong Interviews Alan Kay

Youtube video (via HN)

By far not the best presentation of Kay's ideas, but surely a must-watch for fans. Otherwise, skip to the last third of the interview, which might add to what most people here already know.

It is interesting that in this talk Kay rather explicitly talks about programming languages as abstraction layers. He also mentions some specifics that may not be as well known as others, yet played a role in his trajectory, such as META.
I fully sympathize with his irritation with the lack of attention to and appreciation of fundamental concepts and theoretical frameworks in CS. On the other hand, I find his allusions to biology unconvincing.

Polymorphism, subtyping and type inference in MLsub

I am very enthusiastic about the following paper: it brings new ideas and solves a problem that I did not expect to be solvable, namely usable type inference when both polymorphism and subtyping are implicit. (By "usable" here I mean that the inferred types are both compact and principal, while previous work generally had only one of those properties.)
Polymorphism, Subtyping, and Type Inference in MLsub
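To give a feel for the problem being solved, here is a tiny illustration. It is written in OCaml syntax, not MLsub, and the MLsub-style type in the comment is a paraphrase of the paper's approach, so treat it as approximate:

```ocaml
(* Illustration only (OCaml, not MLsub). With plain ML inference,
   unification forces both branches of a conditional to the same type: *)
let select b x y = if b then x else y
(* OCaml infers: bool -> 'a -> 'a -> 'a. *)

(* With implicit subtyping one wants the result to be the *join* of the
   two branch types. MLsub keeps inferred types compact and principal by
   allowing unions only in output positions (and intersections only in
   input positions), giving select a type roughly of the shape
     bool -> 'a -> 'b -> 'a ⊔ 'b
   -- see the paper for the actual syntax and typing rules. *)
```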
The paper is full of interesting ideas. For example, one idea is that adding type variables to the base grammar of types -- instead of defining them by their substitution -- forces us to look at our type systems in ways that are more open to extension with new features. I would recommend looking at this paper even if you are interested in ML and type inference but not subtyping, or in polymorphism and subtyping but not type inference, or in subtyping and type inference but not functional languages.

This paper is also a teaser for the first author's PhD thesis, Algebraic Subtyping. There is also an implementation available.

(If you are looking for interesting work on inference of polymorphism and subtyping in object-oriented languages, I would recommend Getting F-Bounded Polymorphism into Shape by Ben Greenman, Fabian Muehlboeck and Ross Tate, 2014.)

Salon des Refusés -- Dialectics for new computer science

In the first decade of the twenty-first century, the Feyerabend Project organized several workshops to discuss and develop new ways to think about programming languages and computing in general. A new event in this direction is a workshop that will take place in Brussels, in April, co-located with the new <Programming> conference -- also worth a look.

Salon des Refusés -- Dialectics for new computer science
The workshop's webpage also contains descriptions of some formats that could "make it possible to think about programming in a new way", including: Thought experiments, Experimentation, Paradigms, Metaphors, myths and analogies, and From jokes to science fiction.

For writings on similar questions about formalism, paradigms or method in programming language research, see Richard Gabriel's work, especially The Structure of a Programming Language Revolution (2012) and Writers' Workshops As Scientific Methodology (?); Tomas Petricek's work, especially Against a Universal Definition of 'Type' (2015) and Programming language theory: Thinking the unthinkable (2016); and Jonathan Edwards' blog: Alarming Development.

For programs of events of similar inspiration in the past, you may be interested in the Future of Programming workshops: the program of 2014, the program of September 2015, and the program of October 2015. Other events that are somewhat similar in spirit -- but maybe less radical in content -- are Onward!, NOOL and OBT.

Proving Programs Correct Using Plain Old Java Types

Proving Programs Correct Using Plain Old Java Types, by Radha Jagadeesan, Alan Jeffrey, Corin Pitcher, James Riely:
Not a new paper, but it provides a lightweight verification technique for some program properties that you can use right now, without waiting for integrated theorem provers or SMT solvers. Properties that require only monotone typestates can be verified, i.e. those where operations can only move the typestate "forwards". In order to achieve this, they require programmers to follow a few simple rules to avoid Java's pervasive nulls. These are roughly: don't assign null explicitly, and be sure to initialize all fields when constructing objects.

Automating Ad hoc Data Representation Transformations

Automating Ad hoc Data Representation Transformations by Vlad Ureche, Aggelos Biboudis, Yannis Smaragdakis, and Martin Odersky:
This is a realization of an idea that has been briefly discussed here on LtU a few times, whereby a program is written using high-level representations, and the user has the option to provide a lowering to a more efficient representation after the fact. This contrasts with the typical approach of providing efficient primitives, like primitive unboxed values, and leaving it to the programmer to compose them efficiently up front.

By naasking at 2016-09-22 18:29 | Functional | General | Object-Functional | OOP | Software Engineering
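As a rough analogue of the programming model (my own sketch in OCaml, not the paper's Scala adrt mechanism), one can program against a high-level representation behind an interface and swap in a more efficient lowering later:

```ocaml
(* Rough analogue only: the Scala ADRT work applies the lowering
   automatically inside a marked scope; here the swap is done manually
   via a functor over an abstract representation. *)
module type COMPLEX = sig
  type t
  val make : float -> float -> t
  val add  : t -> t -> t
  val re   : t -> float
end

(* High-level, obvious representation: a pair. *)
module Boxed : COMPLEX = struct
  type t = float * float
  let make re im = (re, im)
  let add (a, b) (c, d) = (a +. c, b +. d)
  let re (a, _) = a
end

(* A later, more efficient lowering: an all-float record, which OCaml
   stores unboxed. *)
module Unboxed : COMPLEX = struct
  type t = { re : float; im : float }
  let make re im = { re; im }
  let add x y = { re = x.re +. y.re; im = x.im +. y.im }
  let re x = x.re
end

(* Client code is written once, against the interface. *)
module Norm (C : COMPLEX) = struct
  let sum_re xs = List.fold_left (fun acc x -> acc +. C.re x) 0. xs
end
```

The difference, and the point of the paper, is that ADRT retrofits the efficient representation onto code written directly against the high-level one, without requiring this kind of up-front abstraction.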